96 research outputs found

    Preliminary weight and costs of sandwich panels to distribute concentrated loads

    Minimum-mass honeycomb sandwich panels were sized for transmitting a concentrated load to a uniform reaction through various distances. The face skin gages were fully stressed with a finite element computer code. The panel general stability was evaluated with a buckling computer code labeled STAGS-B. Two skin materials were considered: aluminum and graphite-epoxy. The core was constant-thickness aluminum honeycomb. Various panel sizes and load levels were considered. The computer-generated data were generalized to allow preliminary least-mass panel designs for a wide range of panel sizes and load intensities. An assessment of panel fabrication cost was also conducted. Various comparisons between panel mass, panel size, panel loading, and panel cost are presented in both tabular and graphical form.
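    The mass bookkeeping behind such mass-versus-size comparisons can be sketched in a few lines. This is an illustration only: the skin thickness, core thickness, and densities below are invented examples, not the sized designs from the study.

```python
def panel_areal_mass(t_skin_m, rho_skin, t_core_m, rho_core):
    """Areal mass (kg/m^2) of a sandwich panel: two identical face
    skins plus a constant-thickness honeycomb core."""
    return 2.0 * t_skin_m * rho_skin + t_core_m * rho_core

# Example: 1 mm aluminium skins (2700 kg/m^3) on a 25 mm aluminium
# honeycomb core with a bulk density of 50 kg/m^3.
mass = panel_areal_mass(1e-3, 2700.0, 25e-3, 50.0)
print(mass)  # 2*0.001*2700 + 0.025*50 = 5.4 + 1.25 = 6.65 kg/m^2
```

Multiplying by panel area gives the panel mass that such trade studies tabulate against load intensity and cost.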

    From analog to digital

    Analog-to-digital conversion and its reverse, digital-to-analog conversion, are ubiquitous in all modern electronics, from instrumentation and telecommunication equipment to computers and entertainment. We shall explore the consequences of converting signals between the analog and digital domains and give an overview of the internal architecture and operation of a number of converter types. The importance of analog input and clock signal integrity will be explained, and methods to prevent or mitigate the effects of interference will be shown. Examples will be drawn from several manufacturers' datasheets.
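    A central consequence of conversion is quantization noise; for an ideal N-bit converter with a full-scale sine input, the textbook SNR is 6.02N + 1.76 dB. The sketch below (not drawn from any particular datasheet) quantizes a full-scale sine and compares the measured SNR with that formula.

```python
import math

def ideal_snr_db(bits):
    """Theoretical SNR of an ideal N-bit quantizer driven by a
    full-scale sine: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def quantize(x, bits, full_scale=1.0):
    """Mid-tread uniform quantizer over [-full_scale, +full_scale]."""
    levels = 2 ** bits
    step = 2.0 * full_scale / levels
    code = round(x / step)
    code = max(-(levels // 2), min(levels // 2 - 1, code))
    return code * step

# Measure the SNR of a quantized full-scale sine and compare with theory.
N, bits = 4096, 8
sig = [math.sin(2 * math.pi * 17 * n / N) for n in range(N)]
err = [quantize(s, bits) - s for s in sig]
snr = 10 * math.log10(sum(s * s for s in sig) / sum(e * e for e in err))
print(round(snr, 1), round(ideal_snr_db(bits), 1))
```

The measured value lands close to the ideal 49.9 dB for 8 bits; clock jitter and interference, discussed in the text, only degrade it further.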

    Control server for the PS orbit acquisition system, Status 2009

    CERN’s Proton Synchrotron (CPS) has been fitted with a new Trajectory Measurement System (TMS). Analogue signals from forty Beam Position Monitors (BPM) are digitized at 125 MS/s and then further treated in the digital domain to derive the positions of all individual particle bunches on the fly. Large FPGAs are used to handle the digital processing. The system fits in fourteen plug-in modules distributed over three half-width cPCI crates that store data in circular buffers. They are connected to a Linux computer by means of a private Gigabit Ethernet segment. Dedicated server software, running under Linux, knits the system into a coherent whole [1]. The corresponding low-level software using FESA (the BPMOPS class) was implemented while respecting the standard interface for beam position measurements. The BPMOPS server publishes values on request after data extraction and conversion from the TMS server. This software runs on a VME LynxOS platform and, through dedicated electronics, can also control the pick-up sensitivities, making it a complete system. APIs for hardware experts are also available, either to calibrate the system or to adjust the different TMS settings needed for each cycle. Running this system in operational mode has already given very good results compared to the old CODD system, and some of them will be presented in this document.
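    The per-bunch position derivation typically reduces, per monitor, to a normalised difference of electrode amplitudes. The sketch below uses an invented sensitivity constant, not the actual PS pick-up calibration.

```python
def bpm_position(v_a, v_b, sensitivity_mm=20.0):
    """Beam position (mm) from a pair of electrode amplitudes using the
    standard difference-over-sum estimate. The sensitivity factor is
    illustrative only, not a calibrated PS value."""
    return sensitivity_mm * (v_a - v_b) / (v_a + v_b)

# A bunch slightly closer to electrode A reads as a positive offset;
# the normalisation by the sum makes the result intensity-independent.
print(round(bpm_position(1.05, 0.95), 2))  # 20 * 0.1 / 2.0 = 1.0
```

In the TMS this arithmetic runs in the FPGA on the digitized electrode samples, bunch by bunch.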

    VME-based remote instrument control without ground loops

    New electronics has been developed for the remote control of the pick-up electrodes at the CERN Proton Synchrotron (PS). Communication between VME-based control computers and remote equipment is via full duplex point-to-point digital data links. Data are sent and received in serial format over simple twisted pairs at a rate of 1 Mbit/s, for distances of up to 300 m. Coupling transformers are used to avoid ground loops. The link hardware consists of a general-purpose VME module, the 'TRX' (transceiver), containing four FIFO-buffered communication channels, and a dedicated control card for each remote station. The remote transceiver electronics is simple enough not to require micro-controllers or processors. Currently, some sixty pick-up stations of various types, all over the PS Complex (accelerators and associated beam transfer lines), are equipped with the new system. Even though the TRX was designed primarily for communication with pick-up electronics, it could also be used for other purposes, for example to form a local area network.
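    Transformer coupling only works with a DC-free line code. The abstract does not say which code the TRX uses; as an illustration of the idea, a Manchester scheme (a common choice for transformer-coupled serial links) encodes each bit as a transition, so the line carries no DC component.

```python
def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> high-low, 1 -> low-high.
    Each data bit becomes two half-bit line symbols."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(halves):
    """Recover data bits from pairs of half-bit line symbols."""
    bits = []
    for i in range(0, len(halves), 2):
        pair = (halves[i], halves[i + 1])
        bits.append(1 if pair == (0, 1) else 0)
    return bits

msg = [1, 0, 1, 1, 0]
print(manchester_decode(manchester_encode(msg)) == msg)  # True
```

Because every bit cell contains exactly one high and one low half, the average line level is constant, which is what lets the signal pass through the coupling transformers without baseline wander.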

    Digital Beam Trajectory and Orbit System for the CERN Proton Synchrotron

    A new trajectory and orbit measurement system using fast signal sampling and digital signal processing in an FPGA is proposed for the CERN PS. The system uses a constant sampling frequency while the beam revolution frequency changes during acceleration. Synchronization with the beam is accomplished through a numerical PLL algorithm. This algorithm is also capable of treating RF gymnastics such as bunch splitting or batch compression with the help of external timing signals. Baseline and position calculation are provided in the FPGA code as well. After being implemented in C and MATLAB and tested with data from a test run at the PS, the algorithms have now been implemented in the FPGA for online use. Results of measurements on a single beam position monitor in the CERN PS and the SIS-18 at GSI will be presented.
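    The numerical PLL idea can be illustrated with a toy second-order loop: a numerical oscillator chases a ramping frequency that stands in for the revolution frequency during acceleration. The gains and the ramp are arbitrary; the real FPGA algorithm is certainly more elaborate.

```python
def track(true_freqs, f0, kp=0.1, ki=0.01):
    """Track a sequence of per-sample input frequencies (cycles/sample)
    with a type-II digital PLL; returns the NCO frequency at each step."""
    phase_in = phase_nco = 0.0
    integ = f0                      # integrator pre-loaded with f0
    out = []
    for f in true_freqs:
        phase_in += f
        # Phase error wrapped into [-0.5, 0.5) cycles.
        err = (phase_in - phase_nco + 0.5) % 1.0 - 0.5
        integ += ki * err           # integral path: tracks the frequency
        f_nco = integ + kp * err    # proportional path: damps the loop
        phase_nco += f_nco
        out.append(f_nco)
    return out

# Frequency ramps from 0.10 to 0.12 cycles/sample, as during acceleration.
ramp = [0.10 + 0.02 * n / 500 for n in range(500)]
est = track(ramp, f0=0.10)
print(abs(est[-1] - ramp[-1]) < 1e-3)  # locked by the end of the ramp
```

A type-II loop like this follows a linear frequency ramp with only a constant small phase lag, which is why a PLL structure suits an accelerating beam.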

    Accelerating NBODY6 with Graphics Processing Units

    We describe the use of Graphics Processing Units (GPUs) for speeding up the code NBODY6, which is widely used for direct N-body simulations. Over the years, the N^2 nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time-steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers, which sped up the force calculation further, we are now in the era of GPUs, where relatively small hardware systems are highly cost-effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force, which typically involves some 99 percent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction term is calculated using mainly single precision. We also discuss further strategies connected with coordinate and velocity prediction required by the integration scheme. This leaves hard binaries and multiple close encounters, which are treated by several regularization methods. The present NBODY6-GPU code is well balanced for simulations in the particle range 10^4 to 2×10^5 for a dual GPU system attached to a standard PC. Comment: 8 pages, 3 figures, 2 tables, MNRAS accepted.
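    The regular/irregular split can be sketched in plain Python: the acceleration on a particle is summed separately over neighbours inside a radius and over the distant majority. This is only a stand-in for what NBODY6 does on the GPU and host; the softening and neighbour radius are arbitrary.

```python
import math

def split_accel(pos, masses, i, r_nb, eps=1e-3):
    """Softened 1/r^2 acceleration on particle i, split into a
    'regular' part (distant particles) and an 'irregular' part
    (neighbours within radius r_nb)."""
    xi, yi, zi = pos[i]
    reg, irr = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    for j, (xj, yj, zj) in enumerate(pos):
        if j == i:
            continue
        dx, dy, dz = xj - xi, yj - yi, zj - zi
        r2 = dx * dx + dy * dy + dz * dz + eps * eps
        f = masses[j] / (r2 * math.sqrt(r2))   # m_j / r^3
        tgt = irr if r2 < r_nb * r_nb else reg
        tgt[0] += f * dx; tgt[1] += f * dy; tgt[2] += f * dz
    return reg, irr

# One close neighbour and one distant particle acting on particle 0.
pos = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (2.0, 0.0, 0.0)]
m = [1.0, 1.0, 1.0]
reg, irr = split_accel(pos, m, 0, r_nb=0.5)
print(irr[0] > reg[0] > 0.0)  # neighbour dominates the force
```

The neighbour (irregular) sum is small but must be refreshed often; the large regular sum changes slowly, which is exactly why it pays to push it onto the GPU.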

    Measurement of the mean radial position of a lead ion beam in the CERN PS

    The intensity of the lead ion beam in the PS, nominally 4×10^8 charges of Pb53+ per bunch, is too low for the closed orbit measurement system. However, for successful acceleration it is sufficient to know the mean radial position (MRP). A system was thus designed for simultaneous acquisition of revolution frequency and magnetic field. The frequency measurement uses a direct digital synthesiser (DDS), phase-locked to the beam signal from a special high-sensitivity pick-up. The magnetic field is obtained from the so-called B-train. From these two values, the MRP is calculated. The precision depends on the frequency measurement and on the accuracy of the value for the magnetic field. Furthermore, exact knowledge of the transition energy is essential. This paper describes the hardware and software developed for the MRP system, and discusses the calibration of the B measurement with a proton beam.
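    The underlying relation can be sketched for the proton calibration case: the bending field fixes the momentum through B·rho, and together with the measured revolution frequency this yields the orbit length and hence the mean radius. The bending radius and frequency below are illustrative numbers, not the actual PS machine constants.

```python
import math

C_LIGHT = 299_792_458.0    # speed of light, m/s
M_P = 0.938_272_088        # proton mass, GeV/c^2

def mean_radius(f_rev_hz, b_tesla, rho_m):
    """Mean orbit radius (m) of a proton from the measured revolution
    frequency and bending field; rho_m is the magnetic bending radius."""
    p = 0.299_792_458 * b_tesla * rho_m          # momentum in GeV/c
    gamma = math.sqrt(1.0 + (p / M_P) ** 2)
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    circumference = beta * C_LIGHT / f_rev_hz    # orbit length = beta*c / f
    return circumference / (2.0 * math.pi)

# Illustrative values: B = 1 T, bending radius 70 m, f_rev = 477 kHz.
print(round(mean_radius(477e3, 1.0, 70.0), 1))  # ~ 99.9 m here
```

In the real system the derivative form of this relation is used, which is where the transition energy enters the error budget.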

    Harvesting graphics power for MD simulations

    We discuss an implementation of molecular dynamics (MD) simulations on a graphics processing unit (GPU) in the NVIDIA CUDA language. We tested our code on a modern GPU, the NVIDIA GeForce 8800 GTX. Results for two MD algorithms suitable for short-ranged and long-ranged interactions, and for a congruential shift random number generator, are presented. The performance of the GPUs is compared to that of their main-processor counterparts. We achieve speedups of up to 80-, 40- and 150-fold, respectively. With the newest generation of GPUs one can run standard MD simulations at 10^7 flops/$. Comment: 12 pages, 5 figures. Submitted to Mol. Sim.
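    The abstract does not spell out the "congruential shift" generator; as a stand-in, here is a minimal 32-bit linear congruential generator of the kind often run as one independent stream per GPU thread. The constants are the classic Numerical Recipes LCG values, not necessarily those of the paper.

```python
class LCG:
    """Minimal 32-bit linear congruential generator:
    state <- (A*state + C) mod 2^32."""
    A, C, M = 1664525, 1013904223, 2 ** 32

    def __init__(self, seed):
        self.state = seed % self.M

    def next_u32(self):
        self.state = (self.A * self.state + self.C) % self.M
        return self.state

    def next_float(self):
        """Uniform float in [0, 1)."""
        return self.next_u32() / self.M

rng = LCG(12345)
samples = [rng.next_float() for _ in range(3)]
print(all(0.0 <= x < 1.0 for x in samples))  # True
```

Per-thread generators like this need only a few registers of state, which is what makes them attractive inside GPU kernels.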

    Analysing Astronomy Algorithms for GPUs and Beyond

    Astronomy depends on ever-increasing computing power. Processor clock-rates have plateaued, and increased performance is now appearing in the form of additional processor cores on a single chip. This poses significant challenges to the astronomy software community. Graphics Processing Units (GPUs), now capable of general-purpose computation, exemplify both the difficult learning curve and the significant speedups exhibited by massively-parallel hardware architectures. We present a generalised approach to tackling this paradigm shift, based on the analysis of algorithms. We describe a small collection of foundation algorithms relevant to astronomy and explain how they may be used to ease the transition to massively-parallel computing architectures. We demonstrate the effectiveness of our approach by applying it to four well-known astronomy problems: Högbom CLEAN, inverse ray-shooting for gravitational lensing, pulsar dedispersion and volume rendering. Algorithms with well-defined memory access patterns and high arithmetic intensity stand to receive the greatest performance boost from massively-parallel architectures, while those that involve a significant amount of decision-making may struggle to take advantage of the available processing power. Comment: 10 pages, 3 figures, accepted for publication in MNRAS.
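    The arithmetic-intensity argument can be made concrete with a simple roofline estimate: a kernel's attainable rate is capped either by peak compute or by memory bandwidth times its flops-per-byte ratio. All machine numbers below are hypothetical.

```python
def attainable_gflops(intensity, peak_gflops, bandwidth_gbs):
    """Simple roofline model: attainable performance is the lesser of
    peak compute and (memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity)

# Example: an axpy-like kernel does 2 flops per element while moving
# 3 * 8 bytes (two reads and one write of doubles), so its arithmetic
# intensity is 2/24 flop/byte.
intensity = 2 / 24
rate = attainable_gflops(intensity, peak_gflops=1000.0, bandwidth_gbs=150.0)
print(rate)  # ~12.5 GFLOP/s on this hypothetical machine: memory-bound
```

A kernel whose intensity lies above the machine's balance point (peak/bandwidth) is compute-bound instead, which is the regime where GPUs deliver their headline speedups.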

    A pilgrimage to gravity on GPUs

    In this short review we present the developments over the last five decades that have led to the use of Graphics Processing Units (GPUs) for astrophysical simulations. Since the introduction of NVIDIA's Compute Unified Device Architecture (CUDA) in 2007, the GPU has become a valuable tool for N-body simulations and is so popular these days that almost all papers about high-precision N-body simulations use methods that are accelerated by GPUs. With GPU hardware becoming more advanced and being used for more advanced algorithms, such as gravitational tree-codes, we see a bright future for GPU-like hardware in computational astrophysics. Comment: To appear in European Physical Journal Special Topics, "Computer Simulations on Graphics Processing Units". 18 pages, 8 figures.